Covariance-domain Dictionary Learning for Overcomplete EEG Source Identification
We propose an algorithm targeting the identification of more sources than
channels for electroencephalography (EEG). Our overcomplete source
identification algorithm, Cov-DL, leverages dictionary learning methods applied
in the covariance-domain. Assuming that EEG sources are uncorrelated within
moving time-windows and the scalp mixing is linear, the forward problem can be
transferred to the covariance domain which has higher dimensionality than the
original EEG channel domain. This allows for learning the overcomplete mixing
matrix that generates the scalp EEG even when there may be more sources than
sensors active at any time segment, i.e. when there are non-sparse sources.
This contrasts with straightforward dictionary learning methods, which rely on
a sparsity assumption that is not satisfied in low-density EEG systems. We
present two different learning strategies for
Cov-DL, determined by the size of the target mixing matrix. We demonstrate that
Cov-DL outperforms existing overcomplete ICA algorithms under various scenarios
of EEG simulations and real EEG experiments.
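The key step described above can be illustrated with a minimal sketch: half-vectorizing windowed channel covariances lifts M-channel data into an M(M+1)/2-dimensional space, where an overcomplete mixing matrix becomes learnable. This is not the authors' Cov-DL implementation; the window length and all names are illustrative, assuming uncorrelated sources and linear mixing as in the abstract.

```python
import numpy as np

def vech(S):
    """Half-vectorize a symmetric matrix: stack its lower-triangular entries."""
    idx = np.tril_indices(S.shape[0])
    return S[idx]

def covariance_domain_features(X, win=128):
    """Map windowed EEG (channels x samples) to covariance-domain vectors.

    Each length-`win` window of M-channel data yields an M(M+1)/2-dim
    vector, a higher-dimensional space than the M-dim channel domain.
    """
    M, T = X.shape
    feats = []
    for start in range(0, T - win + 1, win):
        C = np.cov(X[:, start:start + win])
        feats.append(vech(C))
    return np.array(feats)  # shape: (n_windows, M*(M+1)//2)

# With uncorrelated sources s and linear mixing A (M x N, N > M),
# Cov(x) = A diag(p) A^T is linear in the source powers p, so dictionary
# learning on vech(Cov) can recover more atoms than there are channels.
rng = np.random.default_rng(0)
M, N, T = 8, 16, 2048
A = rng.standard_normal((M, N))   # overcomplete mixing, N > M
S = rng.standard_normal((N, T))   # uncorrelated sources
X = A @ S
F = covariance_domain_features(X)
print(F.shape)  # (16, 36): 36 covariance dims vs. 8 channels
```

Any standard dictionary learning routine could then be run on `F`; the point is only that the covariance domain has room for more atoms than sensors.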
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly
used training algorithm, known as Contrastive Divergence (CD), are based on
discrete updates and exact arithmetic, which do not map directly onto a
dynamical neural substrate. Here, we present an event-driven variation of CD to
train an RBM constructed with Integrate & Fire (I&F) neurons that is
constrained by the limitations of existing and near-future neuromorphic
hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike Time Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.
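The role STDP plays above can be sketched with a standard pairwise STDP rule. This is a generic textbook kernel, not the paper's event-driven CD algorithm; the time constants and amplitudes are illustrative assumptions.

```python
import numpy as np

def stdp_dw(pre_times, post_times, a_plus=0.01, a_minus=0.0105, tau=20.0):
    """Pairwise STDP weight change for one synapse (spike times in ms).

    Post-after-pre pairs potentiate, pre-after-post pairs depress,
    each with an exponentially decaying time window.
    """
    dw = 0.0
    for tpre in pre_times:
        for tpost in post_times:
            dt = tpost - tpre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)
    return dw

# Event-driven CD, schematically: run the network clamped to data and apply
# STDP with positive sign, then run it free and apply STDP with negative
# sign. The two phases replace CD's discrete positive/negative statistics
# with online, asynchronous updates.
print(stdp_dw([10.0], [15.0]))  # potentiation: post fires 5 ms after pre
```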
Learning Global Direct Inverse Kinematics
We introduce and demonstrate a bootstrap method for construction of an inverse function for the robot kinematic mapping using only sample configuration-space/workspace data. Unsupervised learning (clustering) techniques are used on pre-image neighborhoods in order to learn to partition the configuration space into subsets over which the kinematic mapping is invertible. Supervised learning is then used separately on each of the partitions to approximate the inverse function. The ill-posed inverse kinematics function is thereby regularized, and a global inverse kinematics solution for the wristless Puma manipulator is developed. 1 INTRODUCTION The robot forward kinematics function is a continuous mapping f : C ⊂ Q^n → W ⊂ X^m which maps a set of n joint parameters from the configuration space, C, to the m-dimensional task space, W. If m < n, the robot has redundant degrees of freedom (dof's). In general, control objectives such as the positioning and orienting of the end-...
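The partition-then-invert strategy described above can be sketched on a planar 2-link arm, where the elbow sign analytically separates the two solution branches; a clustering step plays this role when no analytic partition is known. This is a toy illustration, not the paper's method or the Puma manipulator, and the nearest-sample lookup merely stands in for the supervised learner fit per region.

```python
import numpy as np

L1, L2 = 1.0, 1.0

def fk(q):
    """Forward kinematics of a planar 2-link arm: joint angles -> (x, y)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(1)
Q = rng.uniform([-np.pi, -np.pi], [np.pi, np.pi], size=(5000, 2))
X = fk(Q)

# Partition the configuration space into subsets over which fk is
# invertible: here, the elbow-up and elbow-down branches.
parts = {s: Q[np.sign(Q[:, 1]) == s] for s in (-1.0, 1.0)}

def inverse(x_query, branch):
    """Approximate direct inverse on one partition (nearest-sample lookup)."""
    Qb = parts[branch]
    d = np.linalg.norm(fk(Qb) - x_query, axis=1)
    return Qb[np.argmin(d)]

target = np.array([1.2, 0.7])
q_hat = inverse(target, branch=1.0)
err = np.linalg.norm(fk(q_hat[None]) - target)
print(q_hat, err)  # small residual on the elbow-up branch
```

Restricted to one branch, the many-to-one forward map becomes one-to-one, so a single-valued inverse approximation is well defined there.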
Global Topological Structure of the Kinematics of Redundant Manipulators
The nonlinear inverse kinematics problem for a redundant manipulator is difficult to solve. The kinematics of a redundant manipulator generically partitions into configuration space regions which are fiber bundles, where the fibers are the "self-motion manifolds" along which the manipulator can change configuration while keeping the end-effector at a fixed location. Under most circumstances, these are trivial fiber bundles; thus a canonical, or natural, parameterization of the self-motion manifolds exists. As a result, the image of a bundle in the workspace is invertible by an inverse function, parameterized by the natural representation of the excess dof. 1 Introduction The forward kinematics function of a manipulator, x = f(θ) (1), maps a set of joint values θ ∈ Θ, a configuration, to an end-effector location in the reachable workspace, x ∈ W = f(Θ). It is assumed that the dimensionality of the configuration space Θ exceeds that of W, in which case the manipulator...
Global Regularization of Inverse Kinematics for Redundant Manipulators
The inverse kinematics problem for redundant manipulators is ill-posed and nonlinear. There are two fundamentally different issues which result in the need for some form of regularization: the existence of multiple solution branches (global ill-posedness) and the existence of excess degrees of freedom (local ill-posedness). For certain classes of manipulators, learning methods applied to input-output data generated from the forward function can be used to globally regularize the problem by partitioning the domain of the forward mapping into a finite set of regions over which the inverse problem is well-posed. Local regularization can be accomplished by an appropriate parameterization of the redundancy consistently over each region. As a result, the ill-posed problem can be transformed into a finite set of well-posed problems. Each can then be solved separately to construct approximate direct inverse functions. 1 INTRODUCTION The robot forward kinematics function maps a vector ...
Canonical Parameterization of Excess Motor Degrees of Freedom with Self-Organizing Maps
The problem of sensorimotor control is underdetermined due to excess (or "redundant") degrees of freedom when there are more joint variables than the minimum needed for positioning an end-effector. A method is presented for solving the nonlinear inverse kinematics problem for a redundant manipulator by learning a natural parameterization of the inverse solution manifolds with self-organizing maps. The parameterization approximates the topological structure of the joint space, which is that of a fiber bundle. The fibers represent the "self-motion manifolds" along which the manipulator can change configuration while keeping the end-effector at a fixed location. The method is demonstrated for the case of the redundant planar manipulator. Data samples along the self-motion manifolds are selected from a large set of measured input-output data. This is done by taking points in the joint space corresponding to end-effector locations near "query points", which define small neighborhoods ...
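The core mechanism above, a self-organizing map learning a one-parameter representation of a manifold, can be sketched on a toy stand-in for a self-motion manifold: a circle in the plane. This is a generic 1-D SOM, not the paper's setup; the unit count, learning-rate schedule, and the circle itself are illustrative assumptions.

```python
import numpy as np

def train_som_1d(data, n_units=20, epochs=200, lr0=0.5, sigma0=3.0):
    """Minimal 1-D self-organizing map: a chain of units learns a
    one-parameter (canonical) representation of a manifold in the data."""
    rng = np.random.default_rng(2)
    W = data[rng.choice(len(data), n_units)]   # init from data samples
    ranks = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best match
            h = np.exp(-((ranks - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)     # pull bmu and its neighbors
    return W

# Toy "self-motion manifold": a circle. After training, the chain of units
# lies along the circle, and the unit index provides a 1-D parameterization.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
W = train_som_1d(circle)
radii = np.linalg.norm(W, axis=1)
print(radii.min(), radii.max())  # units settle near radius 1
```

The unit index along the chain then serves as the learned coordinate on the manifold, analogous to the natural parameterization of the excess degree of freedom.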